
# Quantized Text Generation

## Light-R1-14B-DS-GGUF

- License: Apache-2.0
- Publisher: qihoo360
- Tags: Large Language Model
- Downloads: 2,784 · Likes: 9

Light-R1-14B-DS is a 14B-parameter quantized large language model for text generation tasks, designed for efficient inference in resource-constrained environments (see the inference sketch below).
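Since this entry targets low-resource local inference, here is a minimal sketch of how a GGUF build of the model could be loaded and queried with the llama-cpp-python bindings. The file name, context size, and sampling settings are placeholder assumptions, not values published with the model.

```python
# Minimal sketch: run a GGUF quantization locally with llama-cpp-python
# (pip install llama-cpp-python). The model file name below is a placeholder;
# point it at whichever quantized GGUF file you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Light-R1-14B-DS.Q4_K_M.gguf",  # hypothetical file name
    n_ctx=4096,    # context window; lower it to cut memory use on small machines
    n_threads=8,   # CPU threads used for inference
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF quantization in one sentence."}],
    max_tokens=128,
    temperature=0.7,
)
print(result["choices"][0]["message"]["content"])
```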
## Llama 3.2 3B Instruct Abliterated GGUF

- License: MIT
- Publisher: ZeroWw
- Tags: Large Language Model, English
- Downloads: 20 · Likes: 2

An optimized quantized model in which the output and embedding tensors are kept in f16 while the remaining tensors are quantized to q5_k or q6_k, giving a smaller file size with performance comparable to the pure f16 version (see the quantization sketch below).
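Both ZeroWw entries in this list describe the same mixed-precision recipe: keep the output and token-embedding tensors in f16 and quantize everything else to q5_k or q6_k. A quantization along those lines can be produced with llama.cpp's llama-quantize tool; the sketch below drives it from Python. The paths are placeholders, and the exact flag and type spellings should be checked against the llama-quantize help text for your build.

```python
# Sketch: build a mixed-precision GGUF in the spirit of the models above,
# keeping output and token-embedding tensors in f16 while quantizing the rest to q6_k.
# Assumes a llama.cpp build with the llama-quantize binary on PATH; file names are placeholders.
import subprocess

cmd = [
    "llama-quantize",
    "--output-tensor-type", "f16",     # keep the output (lm_head) tensor in f16
    "--token-embedding-type", "f16",   # keep the token-embedding tensor in f16
    "model-f16.gguf",                  # input: full-precision GGUF conversion of the model
    "model-f16-q6_k-mix.gguf",         # output: mixed-precision GGUF
    "Q6_K",                            # quantization type for all remaining tensors
]
subprocess.run(cmd, check=True)
```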
## Meta Llama 3.1 8B Instruct Abliterated GGUF

- License: MIT
- Publisher: ZeroWw
- Tags: Large Language Model, English
- Downloads: 98 · Likes: 17

A text generation model using the same mixed quantization technique: output and embedding tensors in f16, with the other tensors quantized to q5_k or q6_k. It is smaller than the standard q8_0 quantization while maintaining performance comparable to the pure f16 version.